University of Texas at Austin

Upcoming Event: CosmicAI Seminar

1. Cosmo3DFlow: Powering the Digital Twin of the Universe | 2. LLM Reasoning Beyond Scaling

1. Judy Fox, Associate Professor, University of Virginia | 2. Greg Durrett, Associate Professor, NYU

11 AM – 12 PM
Wednesday Mar 4, 2026

Room 201 in UVA Astronomy Building & Zoom

Abstract

Speaker 1: Reconstructing the early universe from the evolved present-day cosmic web is a formidable computational challenge. Traditional 3D grid simulations suffer from the "void problem" because cosmic voids occupy approximately 64% of the universe's volume but contain only 16% of its mass, and uniform voxel representations waste billions of floating-point operations processing essentially empty space.

To address these dimensionality and sparsity bottlenecks, we introduce Cosmo3DFlow, a novel generative artificial intelligence framework that powers a practical digital twin of the universe. Cosmo3DFlow overcomes existing simulation limitations through two key innovations: 1) Spatial-to-Spectral Compression: By integrating a 3D Discrete Wavelet Transform (DWT), we address the void problem by translating spatial emptiness into spectral sparsity. This achieves an 8x spatial compression by decoupling high-frequency details from low-frequency structures, ensuring computational density is concentrated solely on mass-dense filaments and halos. 2) Wavelet Flow Matching: Unlike stochastic diffusion models, we formulate generation as solving a deterministic ordinary differential equation (ODE) using flow matching in the wavelet domain. This yields stable, wavelet-space velocity fields that allow ODE solvers to utilize significantly larger step sizes.
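The spatial-to-spectral idea above can be illustrated with a minimal NumPy sketch (this is an illustration under stated assumptions, not the authors' Cosmo3DFlow code): one level of a 3D Haar wavelet transform splits a volume into 8 subbands, each with 1/8 as many voxels, and a mostly empty "void" field concentrates its energy in a handful of coefficients.

```python
import numpy as np

def haar_1d(x, axis):
    # Single-level orthonormal Haar transform along one axis:
    # pairwise averages (low band) and differences (high band).
    lo = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    hi = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)

def haar_3d(vol):
    # Apply the 1D Haar transform along each of the 3 axes in turn,
    # yielding 8 subbands keyed by their low/high pattern (e.g. 'LLL').
    bands = {'': vol}
    for axis in range(3):
        new = {}
        for key, v in bands.items():
            l, h = haar_1d(v, axis)
            new[key + 'L'] = l
            new[key + 'H'] = h
        bands = new
    return bands

# Toy density field: mostly empty "voids" with a few dense voxels ("halos").
rng = np.random.default_rng(0)
vol = np.zeros((16, 16, 16))
idx = rng.integers(0, 16, size=(40, 3))
vol[idx[:, 0], idx[:, 1], idx[:, 2]] = rng.uniform(1, 10, size=40)

bands = haar_3d(vol)
lll = bands['LLL']          # low-frequency approximation band
print(vol.size, lll.size)   # 4096 512 -> the 8x reduction in voxel count
```

Because the Haar transform here is orthonormal, the total energy of the 8 subbands equals that of the original volume; the "compression" comes from spending compute only on the few subband coefficients that are far from zero.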

At 128³ resolution, Cosmo3DFlow achieves up to a 50x sampling speedup over state-of-the-art diffusion models. By combining a 10x reduction in integration steps with a 5x lower per-step computational cost, Cosmo3DFlow enables cosmological initial conditions to be sampled in seconds rather than minutes, while maintaining rigorous physical accuracy and paving the way for next-generation astrophysical inference.
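Why larger ODE steps help can be seen in a toy 1D flow-matching example (an illustrative sketch, not the paper's model): for the linear interpolation path x_t = (1 - t)·x0 + t·x1, the conditional velocity is the constant x1 - x0, so a fixed-step Euler solver reaches the target with very few steps.

```python
import numpy as np

# Flow matching learns a velocity field v(x, t); sampling integrates the
# deterministic ODE dx/dt = v(x, t) from noise (t = 0) to data (t = 1).

def euler_sample(x0, velocity, n_steps):
    # Fixed-step Euler integration of dx/dt = velocity(x, t) over t in [0, 1].
    x, dt = x0.copy(), 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * velocity(x, k * dt)
    return x

rng = np.random.default_rng(1)
x0 = rng.standard_normal(8)        # "noise" starting point
x1 = np.linspace(-1, 1, 8)         # "data" target (stand-in for a wavelet field)
v = lambda x, t: x1 - x0           # constant velocity of the straight path

# A constant-in-t velocity means even a 2-step solve lands exactly on x1,
# which is why near-straight flows tolerate much coarser step sizes.
print(np.allclose(euler_sample(x0, v, 2), x1))   # True
print(np.allclose(euler_sample(x0, v, 50), x1))  # True
```

A learned velocity field is only approximately straight, but the closer it is to this regime, the fewer integration steps the sampler needs, which is the source of the step-count reduction described above.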

Speaker 2: Large reasoning models have demonstrated capabilities to solve competition-level math problems, answer “deep research” questions, and address complex coding needs. Much of this progress has been enabled by scaling of data: pre-training data to learn vast knowledge, fine-tuning data to learn natural language reasoning, and RL environments to refine that reasoning. In this talk, I will describe the current LLM reasoning paradigm, its boundaries, and the future of LLM reasoning beyond scaling. First, I will describe the state of reasoning models and where I think scaling can lead to some additional (though perhaps limited) successes. I will then shift to discussing more fundamental issues with models that scale will not resolve in the next few years. I will touch on current limitations for long-running AI agents like Claude Code and where I see this technology progressing in the near future.

Biography

Speaker 1: Judy Fox is an Associate Professor in the School of Data Science at the University of Virginia. Her expertise spans big data analytics and computer systems, with research focused on innovative AI systems and real-time machine learning for interdisciplinary applications, including biomedical science, graph and network science, and astronomy. She led an Intel Parallel Computing Center (IPCC) site. Her research has been supported by the National Science Foundation (NSF), National Institutes of Health (NIH), Intel, Microsoft, and Google. Dr. Fox received her Ph.D. in Computer Science from Syracuse University in 2005 with the Outstanding Graduate Student Award. She is also a recipient of the NSF CAREER Award and most recently served as Director of the Ph.D. Program in Data Science at the University of Virginia's School of Data Science.

Speaker 2: Greg Durrett is an associate professor in the Department of Computer Science and the Center for Data Science at New York University. His research is broadly in the areas of natural language processing and machine learning. Currently, his group's focus is on reasoning about knowledge in text, verifying correctness of generation methods, and studying how to make progress on problems that defy LLM scaling. He is a 2023 Sloan Research Fellow and a recipient of a 2022 NSF CAREER award. He received his BS in Computer Science and Mathematics from MIT and his PhD in Computer Science from UC Berkeley, where he was advised by Dan Klein.
